A Multi-modal Sentiment Recognition Method Based on Multi-task Learning
LIN Zijie, LONG Yunfei, DU Jiachen, XU Ruifeng
Acta Scientiarum Naturalium Universitatis Pekinensis, 2021, 57 (1): 7-15. DOI: 10.13209/j.0479-8023.2020.085
To learn more sentiment-oriented video and speech representations through auxiliary tasks, and to improve the effectiveness of multi-modal fusion, this paper proposes a multi-modal sentiment recognition method based on multi-task learning. A multi-modal sharing layer is used to learn sentiment information from the visual and acoustic modalities. Experiments on the MOSI and MOSEI datasets show that adding two auxiliary single-modal sentiment recognition tasks yields more effective single-modal sentiment representations and improves sentiment recognition accuracy by 0.8% and 2.5%, respectively.
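The multi-task setup described in the abstract can be sketched as a combined objective: the main multi-modal sentiment loss plus weighted losses from the two auxiliary single-modal (visual and acoustic) heads. This is a minimal illustrative sketch; the function names, the squared-error loss, and the `aux_weight` value are assumptions, not details from the paper.

```python
def squared_error(pred, target):
    """Simple regression loss applied to each sentiment head."""
    return (pred - target) ** 2

def multi_task_loss(multimodal_pred, visual_pred, acoustic_pred,
                    target, aux_weight=0.5):
    """Total loss = main multi-modal task + weighted auxiliary single-modal tasks."""
    main = squared_error(multimodal_pred, target)
    # The auxiliary heads push each modality-specific representation
    # toward the same sentiment target as the fused prediction.
    aux = squared_error(visual_pred, target) + squared_error(acoustic_pred, target)
    return main + aux_weight * aux

# Example: a fused prediction of 0.8 with unimodal predictions 0.5 and 0.6
# against a target of 1.0.
loss = multi_task_loss(0.8, 0.5, 0.6, target=1.0, aux_weight=0.5)
```

Training then backpropagates this single scalar, so the shared layer receives gradients from all three tasks at once.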
Similar Spatial Textual Objects Retrieval Strategy
GU Yanhui, WANG Daosheng, WANG Yonggen, LONG Yunfei, JIANG Suoliang, ZHOU Junsheng, QU Weiguang
Acta Scientiarum Naturalium Universitatis Pekinensis, 2016, 52 (1): 120-126. DOI: 10.13209/j.0479-8023.2016.008

To address the efficiency and effectiveness issues of traditional similar spatial textual object retrieval, a semantics-aware strategy that can effectively and efficiently retrieve the top-k similar spatial textual objects is proposed. The retrieval strategy is built on a common framework for spatial object retrieval and satisfies users' requirements for both efficiency and effectiveness. Extensive experimental evaluation demonstrates that the proposed method outperforms the state-of-the-art approach.
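The kind of top-k retrieval the abstract refers to can be sketched as scoring each object by a blend of spatial proximity and textual similarity, then keeping the k best. The scoring formula, the Jaccard measure, the `alpha` weight, and the object layout below are all illustrative assumptions, not the paper's actual ranking function.

```python
import heapq
import math

def jaccard(a, b):
    """Textual similarity between two keyword sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def score(query, obj, alpha=0.5):
    """Blend spatial proximity (closer -> higher) with textual similarity."""
    dx = query["loc"][0] - obj["loc"][0]
    dy = query["loc"][1] - obj["loc"][1]
    spatial = 1.0 / (1.0 + math.hypot(dx, dy))
    textual = jaccard(query["terms"], obj["terms"])
    return alpha * spatial + (1 - alpha) * textual

def top_k(query, objects, k):
    """Return the k objects with the highest combined score."""
    return heapq.nlargest(k, objects, key=lambda o: score(query, o))

objects = [
    {"id": 1, "loc": (0.0, 0.0), "terms": {"coffee", "wifi"}},
    {"id": 2, "loc": (5.0, 5.0), "terms": {"coffee", "cake"}},
    {"id": 3, "loc": (0.5, 0.5), "terms": {"books"}},
]
query = {"loc": (0.0, 0.0), "terms": {"coffee"}}
best = top_k(query, objects, k=2)
```

A real system would prune candidates with a spatial-textual index rather than scoring every object, which is where the efficiency gains the abstract claims would come from.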
